Spike-Train Level Backpropagation for Training Deep Recurrent Spiking Neural Networks
As an important class of SNNs, recurrent spiking neural networks (RSNNs) possess great computational power. However, the practical application of RSNNs is severely limited by challenges in training. Biologically inspired unsupervised learning has limited capability in boosting the performance of RSNNs. On the other hand, existing backpropagation (BP) methods suffer from the high complexity of unrolling in time, vanishing and exploding gradients, and approximate differentiation of discontinuous spiking activities when applied to RSNNs. To enable supervised training of RSNNs under a well-defined loss function, we present a novel Spike-Train level RSNNs Backpropagation (ST-RSBP) algorithm for training deep RSNNs. The proposed ST-RSBP directly computes the gradient of a rate-coded loss function, defined at the output layer of the network, with respect to the tunable parameters. The scalability of ST-RSBP is achieved by the proposed spike-train level computation, during which the temporal effects of the SNN are captured in both the forward and backward passes of BP. Our ST-RSBP algorithm can be broadly applied to RSNNs with a single recurrent layer or to deep RSNNs with multiple feed-forward and recurrent layers. On challenging speech and image datasets including TI46, N-TIDIGITS, Fashion-MNIST and MNIST, ST-RSBP trains RSNNs to accuracies surpassing those of current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
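As a rough illustration of what a rate-coded loss at the output layer looks like, the sketch below measures each output neuron's empirical firing rate over a window of T time steps and compares it to a target rate. This is a generic example of rate coding, not the paper's exact formulation; the function and variable names are illustrative.

```python
import numpy as np

def rate_coded_loss(spike_trains, target_rates, T):
    """Mean-squared error between empirical firing rates and target rates.

    spike_trains: (num_outputs, T) binary array of output-layer spikes.
    target_rates: (num_outputs,) desired firing rates in spikes per step.
    """
    rates = spike_trains.sum(axis=1) / T  # empirical rate of each neuron
    return 0.5 * np.mean((rates - target_rates) ** 2)

# Example: 3 output neurons observed over T = 10 steps.
spikes = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # fires at rate 0.5
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # silent, rate 0.0
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # fires every step, rate 1.0
])
targets = np.array([0.5, 0.0, 1.0])
loss = rate_coded_loss(spikes, targets, T=10)  # rates match targets exactly
```

Because the loss depends on spike counts rather than individual spike times, its gradient can be taken at the spike-train level, which is the granularity ST-RSBP operates at.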
Reviews: Spike-Train Level Backpropagation for Training Deep Recurrent Spiking Neural Networks
Main concerns: The main motivation of the paper, solving backpropagation in spiking neurons, is not an open problem in computational neuroscience. In fact, as recent work shows, learning in spiking neural networks using standard methods is not a problem at all. It has been demonstrated multiple times that backpropagation can be applied with few changes by using pseudo-derivatives to circumvent the non-differentiable spikes. This works very well in practice and scales up to mid-scale benchmark problems (and possibly beyond) without performance loss compared to classical (analog) neural networks. In this context it is hard to pinpoint the main innovation of the manuscript.
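For context, the pseudo-derivative approach the reviewer refers to replaces the zero-almost-everywhere derivative of the spiking nonlinearity with a smooth or piecewise-constant surrogate during the backward pass. The sketch below uses a rectangular surrogate; the window shape and width are illustrative choices, not taken from the paper or the review.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside step (spike if v >= threshold)."""
    return (v >= threshold).astype(float)

def pseudo_derivative(v, threshold=1.0, width=0.5):
    """Backward pass: rectangular surrogate for d(spike)/dv.
    Equal to 1/(2*width) within +/- width of the threshold, 0 elsewhere."""
    return (np.abs(v - threshold) < width) / (2 * width)

v = np.array([0.2, 0.9, 1.1, 2.0])   # membrane potentials
s = spike(v)                         # spikes: [0., 0., 1., 1.]
g = pseudo_derivative(v)             # surrogate gradients: [0., 1., 1., 0.]
```

In practice this surrogate is substituted for the step function's derivative inside BPTT, so gradients flow through neurons whose potential is near threshold even though the forward pass emits discrete spikes.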
The authors propose a variant of the backpropagation through time (BPTT) algorithm for spiking neural networks (SNNs). An interesting aspect is that, instead of unrolling the network computation over time, backpropagation is performed at the level of spike trains. The algorithm is tested on various datasets, achieving state-of-the-art results for SNNs. The approach is original and innovative, and the results are very good and of interest to the spiking neural network community.